1.
Front Neurosci ; 17: 1229275, 2023.
Article in English | MEDLINE | ID: mdl-37674518

ABSTRACT

Orientation detection is an essential function of the visual system. In our previous works, we have proposed a new orientation detection mechanism based on local orientation-selective neurons. We assume that there are neurons solely responsible for orientation detection, with each neuron dedicated to detecting a specific local orientation. The global orientation is then inferred from the local orientation information. Based on this mechanism, we propose an artificial visual system (AVS) that uses a single layer of McCulloch-Pitts neurons to realize these local orientation-sensitive neurons and a layer of sum pooling to realize global orientation detection neurons. We demonstrate that such a single-layer perceptron AVS is capable of detecting global orientation by identifying the orientation with the largest number of activated orientation-selective neurons as the global orientation. To evaluate the effectiveness of this single-layer perceptron AVS, we perform computer simulations. The results show that the AVS works perfectly for global orientation detection, in line with the majority of physiological experiments and models. Moreover, we compare the performance of the single-layer perceptron AVS with that of a traditional convolutional neural network (CNN) on orientation detection tasks. We find that the single-layer perceptron AVS outperforms the CNN in various aspects, including identification accuracy, noise resistance, computational and learning cost, hardware implementation feasibility, and biological plausibility.
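
A minimal sketch of the mechanism described above, assuming binary images, four candidate orientations (0°, 45°, 90°, 135°), and pixel-pair receptive fields for the local McCulloch-Pitts neurons; the paper's exact receptive fields may differ.

```python
import numpy as np

# Offset of the neighbouring pixel that a local orientation-selective
# McCulloch-Pitts neuron inspects for each orientation (hypothetical choice).
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def mp_neuron(a, b, threshold=2):
    """Fire (1) when both inputs are active, i.e. their sum reaches the threshold."""
    return int(a + b >= threshold)

def global_orientation(img):
    h, w = img.shape
    counts = {ang: 0 for ang in OFFSETS}          # sum-pooling layer
    for i in range(h):
        for j in range(w):
            for ang, (di, dj) in OFFSETS.items():
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    counts[ang] += mp_neuron(img[i, j], img[ni, nj])
    return max(counts, key=counts.get)            # most activated orientation wins

img = np.zeros((8, 8), dtype=int)
img[2:7, 4] = 1                                   # a vertical bar
print(global_orientation(img))                    # -> 90
```

Picking the orientation with the most activated local neurons is the sum-pooling-plus-argmax step the abstract describes.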

2.
Sci Rep ; 13(1): 12744, 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37550464

ABSTRACT

The slime mold algorithm (SMA) is a nature-inspired algorithm that simulates biological optimization mechanisms and has achieved great results on various complex stochastic optimization problems. Owing to its simulated biological search principle, SMA has a unique advantage in global optimization problems. However, it still suffers from missing the optimal solution or collapsing to a local optimum when facing complicated problems. To overcome these drawbacks, we add a novel multi-chaotic local operator to the bio-shock feedback mechanism of SMA, exploiting the perturbative nature of chaotic operators to compensate for the insufficient exploration of the local solution space. On this basis, we propose an improved algorithm, MCSMA, by investigating how to improve the probabilistic selection of chaotic operators based on the maximum Lyapunov exponent (MLE), an inherent property of chaotic maps. We compare MCSMA with other state-of-the-art methods on the IEEE Congress on Evolutionary Computation (CEC) benchmarks, namely the CEC2017 test suite and the CEC2011 practical problems, to demonstrate its potency, and we perform dendritic neuron model training to test the robustness of MCSMA on classification problems. Finally, the parameter sensitivities of MCSMA, the utilization of the solution space, and the effectiveness of the MLE are discussed in detail.
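
An illustrative sketch, not the authors' exact formulation, of the core idea: select a chaotic map with probability weighted by its maximum Lyapunov exponent and use it to perturb a candidate solution locally. The map choices, the MLE values, and the perturbation strength below are assumptions.

```python
import numpy as np

# Candidate chaotic maps with (map function, approximate maximum Lyapunov exponent).
CHAOTIC_MAPS = {
    "logistic": (lambda x: 4.0 * x * (1.0 - x), np.log(2.0)),    # MLE = ln 2
    "tent":     (lambda x: 2.0 * min(x, 1.0 - x), np.log(2.0)),  # MLE = ln 2
    "sine":     (lambda x: np.sin(np.pi * x), 0.69),             # numerical estimate
}

def chaotic_perturb(solution, bounds, rng, strength=0.1):
    names = list(CHAOTIC_MAPS)
    mles = np.array([CHAOTIC_MAPS[n][1] for n in names])
    probs = mles / mles.sum()                        # MLE-based selection probabilities
    fmap = CHAOTIC_MAPS[names[rng.choice(len(names), p=probs)]][0]
    x = rng.uniform(0.01, 0.99)                      # chaotic state
    lo, hi = bounds
    out = np.asarray(solution, dtype=float).copy()
    for d in range(out.size):
        x = fmap(x)                                  # iterate the chaotic map
        out[d] += strength * (hi - lo) * (x - 0.5)   # small chaotic local move
    return np.clip(out, lo, hi)

rng = np.random.default_rng(0)
print(chaotic_perturb(np.zeros(5), (-10.0, 10.0), rng))
```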

3.
Comput Intell Neurosci ; 2023: 7037124, 2023.
Article in English | MEDLINE | ID: mdl-36726357

ABSTRACT

Deep learning (DL) has achieved breakthrough successes in various tasks, owing to its layer-by-layer information processing and sufficient model complexity. However, DL suffers from both redundant model complexity and low interpretability, mainly because its basic unit, the McCulloch-Pitts neuron, is oversimplified. The widely recognized, biologically plausible dendritic neuron model (DNM) has demonstrated its effectiveness in alleviating these issues, but it can only solve binary classification tasks, which significantly limits its applicability. In this study, a novel extended network based on the dendritic structure is proposed, enabling it to solve multi-class classification problems, and an efficient error-back-propagation learning algorithm is derived for it for the first time. Extensive experiments on ten datasets, including a real-world quality-of-web-service application, demonstrate the effectiveness and superiority of the proposed method in comparison with nine other state-of-the-art classifiers. The results suggest that the proposed learning algorithm is competent and reliable in terms of classification performance and stability and has a notable advantage on small-scale, imbalanced data. Additionally, the constraints that scale imposes on the network structure are examined.
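
A hedged illustration of the kind of multi-class output stage such an extension requires: per-class scores (for example, one dendritic sub-model per class, which is an assumption here) passed through a softmax, with the standard cross-entropy gradient dL/ds = p - y that an error-back-propagation derivation would push back into each sub-model. The scores below are placeholders, not the paper's architecture.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())          # numerically stabilized softmax
    return e / e.sum()

scores = np.array([1.2, -0.4, 0.7])  # placeholder per-class soma outputs
y = np.array([0.0, 0.0, 1.0])        # one-hot target (class 2)

p = softmax(scores)
loss = -np.sum(y * np.log(p))        # cross-entropy loss
grad_scores = p - y                  # gradient propagated back to each sub-model
print(loss, grad_scores)
```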


Subject(s)
Algorithms; Neurons; Neurons/physiology; Software
4.
IEEE Trans Neural Netw Learn Syst ; 34(4): 2105-2118, 2023 04.
Article in English | MEDLINE | ID: mdl-34487498

ABSTRACT

A single dendritic neuron model (DNM) that possesses the nonlinear information-processing ability of dendrites has been widely used for classification and prediction. Complex-valued neural networks consisting of multiple- or deep-layer McCulloch-Pitts neurons have achieved great success since neural computing was first applied to signal processing, yet complex-valued representations have not appeared in single-neuron architectures. In this article, we first extend the DNM from the real-valued domain to the complex-valued one. The performance of the complex-valued DNM (CDNM) is evaluated on a complex XOR problem, a non-minimum-phase equalization problem, and a real-world wind prediction task. A comparative analysis of a set of elementary transcendental functions as activation functions is also carried out, along with preparatory experiments for determining hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, the complex-valued multi-layer perceptron, and other complex-valued neuron models.
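
A minimal, hedged sketch of what a complex-valued dendritic forward pass can look like: complex weights and biases, a split-type activation applied to real and imaginary parts, multiplicative branches, and a soma that sums the branch outputs. The paper's actual CDNM formulation (synapse function, activation choice, decision rule) may differ.

```python
import numpy as np

def split_tanh(z):
    """Split-type complex activation: tanh applied separately to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cdnm_forward(x, W, B):
    """x: complex inputs (n,); W, B: complex parameters of shape (branches, n)."""
    syn = split_tanh(W * x + B)     # complex-valued synaptic layer
    branches = syn.prod(axis=1)     # multiplicative dendritic branches
    soma = branches.sum()           # soma sums the branch outputs
    return np.abs(soma)             # magnitude used as a real-valued output

rng = np.random.default_rng(0)
x = np.array([1 + 1j, -1 + 0.5j])
W = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
B = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
print(cdnm_forward(x, W, B))
```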


Asunto(s)
Redes Neurales de la Computación , Neuronas , Procesamiento de Señales Asistido por Computador , Algoritmos
5.
Brain Sci ; 12(4)2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35448001

ABSTRACT

The Hubel-Wiesel (HW) model is a classical neurobiological model for explaining the orientation selectivity of cortical cells. However, the HW model has still not been fully verified physiologically, and there are few concise yet efficient systems that quantify and simulate it and that can be used for object orientation detection. To provide a straightforward and efficient quantitative method and to validate the HW model's reasonableness and practicality, we use the McCulloch-Pitts (MP) neuron model to simulate simple cells and complex cells and implement an artificial visual system (AVS) for two-dimensional object orientation detection. First, we realize four types of simple cells, each responsible only for detecting a specific orientation angle locally. Complex cells are realized with the sum function. The local orientation information of an object is collected by the simple cells and then converges onto the complex cells of the corresponding type, which compute a global activation degree. Finally, the global orientation is obtained from the activation degrees of the complex cell types. Based on this scheme, an AVS for global orientation detection is constructed. We conduct computer simulations to demonstrate the feasibility and effectiveness of the scheme and the AVS. The simulations show that the mechanism-based AVS discriminates orientation accurately and exhibits striking similarities to the natural visual system, which indirectly supports the rationality of the Hubel-Wiesel model. Furthermore, compared with a traditional CNN, we find that our AVS outperforms the CNN on orientation detection tasks in identification accuracy, noise resistance, computation and learning cost, hardware implementation, and plausibility.
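
A compact sketch of this simple-cell/complex-cell scheme, assuming the four simple-cell types are realized as 2x2 binary receptive-field masks (the actual receptive fields in the paper may differ): a simple cell fires when its mask is fully covered, and each complex cell sums the firings of its type.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

MASKS = {
    0:   np.array([[1, 1], [0, 0]]),   # horizontal
    45:  np.array([[0, 1], [1, 0]]),
    90:  np.array([[1, 0], [1, 0]]),   # vertical
    135: np.array([[1, 0], [0, 1]]),
}

def detect_orientation(img):
    windows = sliding_window_view(img, (2, 2))                     # local receptive fields
    degrees = {}
    for ang, mask in MASKS.items():
        fired = (windows * mask).sum(axis=(-2, -1)) == mask.sum()  # simple cells
        degrees[ang] = int(fired.sum())                            # complex cell: sum pooling
    return max(degrees, key=degrees.get)                           # highest activation degree

img = np.zeros((8, 8), dtype=int)
img[2:7, 4] = 1                       # a vertical bar
print(detect_orientation(img))        # -> 90
```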

6.
Comput Intell Neurosci ; 2021: 5227377, 2021.
Article in English | MEDLINE | ID: mdl-34966420

ABSTRACT

Microarray gene expression data provide a promising way to diagnose disease and classify cancer. However, in bioinformatics, the gene selection problem, i.e., how to select the most informative genes from thousands of candidates, remains challenging. It is a specific feature selection problem with high-dimensional features and small sample sizes. In this paper, a two-stage method combining a filter feature selection method and a wrapper feature selection method is proposed to solve the gene selection problem. In contrast to common approaches, the proposed method models gene selection as a multiobjective optimization problem. Both stages employ the same multiobjective differential evolution (MODE) as the search strategy but incorporate different objective functions. The three objective functions of the filter stage are mainly based on mutual information, while the two objective functions of the wrapper stage are the number of selected features and the classification error of a naive Bayes (NB) classifier. Finally, the performance of the proposed method is tested and analyzed on six benchmark gene expression datasets. The experimental results verify that the method offers a novel and effective way to solve the gene selection problem by applying a multiobjective optimization algorithm.
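
A sketch of the two wrapper-stage objectives described above, assuming a binary-mask encoding of a feature subset: f1 is the number of selected genes and f2 is the naive Bayes classification error. The surrounding multiobjective DE search and the filter stage are omitted, and the synthetic data below stand in for real expression data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def wrapper_objectives(mask, X, y):
    """mask: boolean vector over features; returns (n_selected, NB error)."""
    if not mask.any():                          # empty subsets are infeasible
        return mask.size, 1.0
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=5).mean()
    return int(mask.sum()), 1.0 - acc

X, y = make_classification(n_samples=80, n_features=50, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)
mask = rng.random(X.shape[1]) < 0.2             # a random candidate subset
print(wrapper_objectives(mask, X, y))
```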


Subject(s)
Algorithms; Computational Biology; Bayes Theorem; Gene Expression; Prospective Studies
7.
IEEE Trans Neural Netw Learn Syst ; 32(11): 5194-5207, 2021 11.
Article in English | MEDLINE | ID: mdl-33156795

ABSTRACT

An approximate logic neural model (ALNM) is a novel single-neuron model with plastic dendritic morphology. During training, the model can eliminate unnecessary synapses and useless dendritic branches, producing a specific dendritic structure for a particular task. The simplified structure of the ALNM can be replaced by a logic circuit classifier (LCC) without losing any essential information. The LCC consists merely of comparators and logic NOT, AND, and OR gates, so it can be easily implemented in hardware. However, the architecture of the ALNM affects the learning capacity, generalization capability, computing time, and approximation quality of the LCC. Thus, a Pareto-based multiobjective differential evolution (MODE) algorithm is proposed to optimize the ALNM's topology and weights simultaneously. MODE can generate a concise and accurate LCC for each specific task from the ALNM. To verify the effectiveness of MODE, extensive experiments are performed on eight benchmark classification problems. The statistical results demonstrate that MODE is superior to conventional learning methods, such as the backpropagation algorithm and single-objective evolutionary algorithms. In addition, compared with several commonly used classifiers, both the ALNM and the LCC obtain promising and competitive classification performance on the benchmark problems, and the experimental results verify that the LCC achieves faster classification than the other classifiers.
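
An illustrative sketch of a logic circuit classifier of the kind described above, built only from comparators and NOT/AND/OR gates. The thresholds and wiring here are invented for illustration; in the paper they are extracted from a trained ALNM.

```python
def comparator(x, theta):
    """1 if the feature exceeds its learned threshold, else 0."""
    return 1 if x > theta else 0

def lcc(features):
    c0 = comparator(features[0], 0.5)
    c1 = comparator(features[1], 1.2)
    c2 = comparator(features[2], -0.3)
    branch_a = c0 and c1              # AND gate on one dendritic branch
    branch_b = c2 and (not c1)        # AND with a NOT gate on another branch
    return int(branch_a or branch_b)  # OR gate at the output (class 0/1)

print(lcc([0.9, 1.5, 0.0]))   # -> 1
print(lcc([0.1, 1.5, -1.0]))  # -> 0
```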


Subject(s)
Algorithms; Databases, Factual/standards; Logic; Neural Networks, Computer; Dendrites/physiology; Humans; Neuronal Plasticity/physiology; Reproducibility of Results; Synapses/physiology
8.
Comput Intell Neurosci ; 2020: 2710561, 2020.
Article in English | MEDLINE | ID: mdl-32405292

ABSTRACT

A dendritic neuron model with adaptive synapses (DMAS) trained by the differential evolution (DE) algorithm is proposed. In the order of signal transmission, the model can be divided into four parts: the synaptic layer, dendritic layer, membrane layer, and somatic cell layer. After training, it can be converted into a logic circuit that is easily implemented in hardware by removing useless synapses and dendrites. This logic circuit can be designed to solve complex nonlinear problems using only four basic logical devices: comparators and AND (conjunction), OR (disjunction), and NOT (negation) gates. To obtain faster and better solutions, we adopt the widely used DE algorithm for DMAS training. We choose five classification datasets from the UCI Machine Learning Repository for the experiments. We analyze and discuss the experimental results in terms of accuracy, convergence rate, ROC curves, and cross-validation, and then compare the results with a dendritic neuron model trained by the backpropagation algorithm (BP-DNM) and a neural network trained by the backpropagation algorithm (BPNN). The analysis shows that DE-DMAS performs better in all aspects.
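
A compact sketch of the standard DE/rand/1/bin step that DE-based training of this kind relies on. The fitness function below is a stand-in; the real one would be the DMAS classification error on the training data.

```python
import numpy as np

def de_step(pop, fitness, F=0.5, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                        # mutation
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                   # guarantee one crossed gene
        trial = np.where(cross, mutant, pop[i])         # binomial crossover
        if fitness(trial) < fitness(pop[i]):            # greedy selection
            new_pop[i] = trial
    return new_pop

sphere = lambda x: float(np.sum(x ** 2))                # stand-in fitness
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 10))
for _ in range(50):
    pop = de_step(pop, sphere, rng=rng)
print(min(sphere(p) for p in pop))
```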


Subject(s)
Algorithms; Dendrites; Models, Neurological; Neural Networks, Computer; Synapses; Animals; Humans
9.
Comput Intell Neurosci ; 2019: 7362931, 2019.
Article in English | MEDLINE | ID: mdl-31485216

ABSTRACT

By employing a neural plasticity mechanism, the original dendritic neuron model (DNM) has succeeded in classification tasks with not only encouraging accuracy but also a simple learning rule. However, real-world data contain considerable redundancy, which makes data analysis with the DNM complicated and time-consuming. This paper proposes a reliable hybrid model that combines a maximum-relevance minimum-redundancy (Mr2) feature selection technique with the DNM (namely, Mr2DNM) for solving practical classification problems. The mutual-information-based Mr2 is applied to evaluate and rank the most informative and discriminative features of a given dataset. The resulting optimal feature subset is used to train and test the DNM on five different problems arising from medical, physical, and social scenarios. Experimental results suggest that the proposed Mr2DNM outperforms the DNM and six other classification algorithms in terms of accuracy and computational efficiency.
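
A sketch of the max-relevance/min-redundancy ranking idea behind Mr2, using a simple histogram-based mutual information estimate and a greedy selection rule; the paper's exact estimator and selection criterion may differ.

```python
import numpy as np

def mutual_info(a, b, bins=8):
    """Histogram estimate of I(a; b) in nats for two 1-D arrays."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mrmr(X, y, k):
    """Greedy selection: maximize relevance to y minus mean redundancy with chosen features."""
    relevance = [mutual_info(X[:, j], y) for j in range(X.shape[1])]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.c_[y + 0.1 * rng.normal(size=200), rng.normal(size=(200, 4))]
print(mrmr(X, y.astype(float), 3))   # the informative feature (index 0) should rank first
```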


Subject(s)
Algorithms; Neuronal Plasticity/physiology; Neurons/physiology; Dendritic Cells/physiology; Models, Biological; Support Vector Machine
10.
Int J Neural Syst ; 29(8): 1950012, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31189391

ABSTRACT

Neurons are the fundamental units of the brain and nervous system. Developing a good model of human neurons is very important not only to neurobiology but also to computer science and many other fields. The McCulloch and Pitts neuron model is the most widely used, but it has long been criticized as oversimplified in view of the properties of real neurons and the computations they perform. On the other hand, it has become widely accepted that dendrites play a key role in the overall computation performed by a neuron. However, modeling dendritic computations and assigning the right synapses to the right dendrites remain open problems in the field. Here, we propose a novel dendritic neural model (DNM) that mimics the essence of the known nonlinear interactions among inputs to the dendrites. In the model, each input is connected to branches through a distance-dependent nonlinear synapse, and each branch performs a simple multiplication on its inputs. The soma then sums the weighted products from all branches and produces the neuron's output signal. We show that the rich nonlinear dendritic response and the powerful nonlinear neural computational capability, as well as many known neurobiological phenomena of neurons and dendrites, may be understood and explained by the DNM. Furthermore, we show that the model is capable of learning and developing an internal structure, such as the locations and types of synapses on the dendritic branches, that is appropriate for a particular task, for example, a linearly nonseparable problem, a real-world benchmark problem (glass classification), and the directional selectivity problem.
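
A minimal numeric sketch of the dendritic computation described above: each input reaches each branch through a nonlinear (sigmoid) synapse, each branch multiplies its synaptic outputs, and the soma sums the branch products. The parameterization (w, q, k) follows common DNM formulations and fills in details the abstract leaves open.

```python
import numpy as np

def synapse(x, w, q, k=5.0):
    """Sigmoidal synapse; the signs of w and q determine excitatory or inhibitory behaviour."""
    return 1.0 / (1.0 + np.exp(-k * (w * x - q)))

def dnm(x, W, Q):
    syn = synapse(x, W, Q)        # (branches, inputs) synaptic outputs
    branch = syn.prod(axis=1)     # nonlinear multiplicative interaction on each branch
    return branch.sum()           # soma output: sum over the branch products

rng = np.random.default_rng(2)
x = rng.uniform(size=4)
W, Q = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(dnm(x, W, Q))
```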


Subject(s)
Dendrites; Models, Neurological; Neurons; Nonlinear Dynamics; Synapses; Machine Learning
11.
Comput Intell Neurosci ; 2018: 9390410, 2018.
Article in English | MEDLINE | ID: mdl-29606961

ABSTRACT

Nowadays, credit classification models are widely applied because they help financial decision-makers handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to credit classification using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neuron and is trained with an error back-propagation algorithm. It is capable of neuronal pruning, removing superfluous synapses and useless dendrites to form a tidy dendritic morphology at the end of learning. Furthermore, we successfully utilize logic circuits (LCs) to simulate the dendritic structures, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments verify that the PNN achieves superior performance compared with other classical algorithms in terms of accuracy and computational efficiency.
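
A hedged sketch of the pruning idea: after training, a synapse whose sigmoid output is almost constant at 1 can be removed without changing its branch's product, a synapse stuck near 0 silences its whole branch, and branches silenced this way are dropped. The concrete pruning criteria used in the paper may differ.

```python
import numpy as np

def prune_synapses(syn_outputs, eps=0.05):
    """syn_outputs: (samples, branches, synapses) sigmoid activations over a dataset."""
    always_one = (syn_outputs > 1 - eps).all(axis=0)   # constant-1: removable, branch survives
    always_zero = (syn_outputs < eps).all(axis=0)      # constant-0: zeroes the branch product
    keep_synapse = ~(always_one | always_zero)
    keep_branch = ~always_zero.any(axis=1)             # drop branches silenced by a 0-synapse
    return keep_synapse, keep_branch

rng = np.random.default_rng(3)
acts = rng.uniform(size=(100, 2, 3))
acts[:, 0, 1] = 0.99                                   # a constant-1 synapse
print(prune_synapses(acts))
```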


Subject(s)
Algorithms; Neural Networks, Computer; Humans
12.
IEEE/ACM Trans Comput Biol Bioinform ; 15(4): 1365-1378, 2018.
Article in English | MEDLINE | ID: mdl-28534784

ABSTRACT

The problem of predicting the three-dimensional (3-D) structure of a protein from its one-dimensional sequence has been called the "holy grail of molecular biology", and it has become an important part of structural genomics projects. Despite rapid developments in computer technology and computational intelligence, it remains challenging and fascinating. In this paper, we propose a multi-objective evolutionary algorithm to solve it. We decompose the protein energy function of the Chemistry at HARvard Macromolecular Mechanics (CHARMM) force field into bond and non-bond energies as the first and second objectives. Considering the effect of solvent, we adopt the solvent-accessible surface area as the third objective. We use 66 benchmark proteins to verify the proposed method and obtain better or competitive results in comparison with existing methods. The results suggest the necessity of incorporating the effect of solvent into a multi-objective evolutionary algorithm to improve protein structure prediction in terms of accuracy and efficiency.
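
A hedged sketch of the three-objective decomposition described above: a harmonic bond term, a Lennard-Jones-plus-Coulomb non-bond term, and the solvent-accessible surface area. The functional forms are the textbook ones, the SASA value is treated as coming from an external routine, and all numbers are illustrative; this is not the paper's implementation.

```python
import numpy as np

def bond_energy(b, k_b, b0):
    """Harmonic bond term: sum of k_b * (b - b0)^2 over bonds."""
    return float(np.sum(k_b * (b - b0) ** 2))

def nonbond_energy(r, eps, sigma, qi, qj, coulomb_const=332.06):
    """Lennard-Jones plus Coulomb terms over pairwise distances r (angstroms)."""
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = coulomb_const * qi * qj / r
    return float(np.sum(lj + coulomb))

def objectives(conf):
    """Objective vector (bond energy, non-bond energy, SASA) for one candidate conformation."""
    f1 = bond_energy(conf["bonds"], conf["k_b"], conf["b0"])
    f2 = nonbond_energy(conf["r"], conf["eps"], conf["sigma"], conf["qi"], conf["qj"])
    f3 = conf["sasa"]             # placeholder: supplied by an external SASA routine
    return f1, f2, f3

conf = {"bonds": np.array([1.53, 1.33]), "k_b": np.array([310.0, 500.0]),
        "b0": np.array([1.526, 1.335]), "r": np.array([4.0, 5.5]),
        "eps": 0.1, "sigma": 3.5, "qi": 0.2, "qj": -0.3, "sasa": 1234.5}
print(objectives(conf))           # illustrative values, not a real protein
```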


Subject(s)
Algorithms; Computational Biology/methods; Protein Conformation; Proteins; Databases, Protein; Hydrophobic and Hydrophilic Interactions; Models, Molecular; Proteins/chemistry; Proteins/genetics; Solvents; Water
13.
Neural Netw ; 60: 96-103, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25170564

ABSTRACT

Recent research has provided strong circumstantial support for dendrites playing a key and possibly essential role in neural computation. In this paper, we propose an unsupervised, learnable neuron model that includes the nonlinear interactions between excitation and inhibition on dendrites. The model neuron self-adjusts its synaptic parameters, including the assignment of synapses to dendrites, according to a generalized delta-rule-like algorithm. The model is used to simulate directionally selective cells with this unsupervised learning algorithm. In the simulations, we initialize the interactions and dendrites of the neuron randomly and use the generalized delta-rule-like unsupervised learning algorithm to learn the two-dimensional multi-directional selectivity problem without an external teacher's signal. Simulation results show that directionally selective cells can be formed by unsupervised learning: the required number of dendritic branches is acquired, with branches enhanced if needed and eliminated if not. Further, the results show whether a synapse exists and, if it does, where it is located and what type (excitatory or inhibitory) it is. This leads us to believe that the proposed neuron model may be considerably more powerful computationally than the McCulloch-Pitts model, because theoretically a single neuron or a single layer of such neurons is capable of solving any complex problem. It may also lead to a completely new technique for analyzing the mechanisms and principles of neurons, dendrites, and synapses.
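
A hedged sketch of the nonlinear excitation-inhibition interaction on a dendrite that such a model builds on: an inhibitory synapse multiplicatively (shuntingly) vetoes the excitatory signal on its branch. The generalized delta-rule-like unsupervised update itself is omitted because the abstract does not spell out its exact form.

```python
import numpy as np

def sigmoid(z, k=5.0):
    return 1.0 / (1.0 + np.exp(-k * z))

def branch_output(x_exc, x_inh, w_exc, w_inh):
    g_exc = sigmoid(w_exc * x_exc)          # excitatory synaptic gate
    g_inh = sigmoid(w_inh * x_inh)          # inhibitory synaptic gate
    return g_exc * (1.0 - g_inh)            # shunting (multiplicative) interaction

# Active inhibition silences the branch even under strong excitation.
print(branch_output(1.0, 1.0, w_exc=2.0, w_inh=2.0))   # near 0
print(branch_output(1.0, 0.0, w_exc=2.0, w_inh=2.0))   # near 0.5
```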


Subject(s)
Dendrites/physiology; Learning; Models, Neurological; Synapses/physiology; Algorithms; Computer Simulation; Humans